Data scarcity and data heterogeneity are prevalent in medical images, so convolutional neural networks (CNNs) trained with previous normalization methods may perform poorly when deployed to a new site. However, a reliable model for real-world applications should generalize well both on in-distribution (IND) and out-of-distribution (OOD) data (e.g., data from new sites). In this study, we propose a novel normalization technique, called window normalization (WIN), which is a simple yet effective alternative to existing normalization methods. Specifically, WIN perturbs the normalization statistics with local statistics computed over a window of features. This feature-level augmentation technique regularizes the model well and significantly improves its OOD generalization. Leveraging its advantages, we further propose a novel self-distillation method, called WIN-WIN, to improve OOD generalization in classification. WIN-WIN is easily implemented with two forward passes and a consistency constraint, and serves as a simple extension of existing methods. Extensive experimental results on various tasks (e.g., glaucoma detection, breast cancer detection, chromosome classification, optic disc and cup segmentation) and 26 datasets demonstrate the generality and effectiveness of our method. The code is available at https://github.com/joe1chief/windownormalizaion.
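As a rough illustration of the idea sketched in this abstract, the snippet below normalizes a feature map with statistics blended from a randomly cropped window and the full map. The window-size range, the blending weight, and the absence of affine parameters are our assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class WindowNorm2d(nn.Module):
    """Hedged sketch of window-based normalization (WIN) as read from the abstract."""

    def __init__(self, eps=1e-5, min_ratio=0.5, mix=0.5):
        super().__init__()
        self.eps = eps
        self.min_ratio = min_ratio  # smallest window side relative to H/W (assumed)
        self.mix = mix              # blend between window and full-map statistics (assumed)

    @staticmethod
    def _stats(x):
        mean = x.mean(dim=(2, 3), keepdim=True)
        var = x.var(dim=(2, 3), keepdim=True, unbiased=False)
        return mean, var

    def forward(self, x):
        full_mean, full_var = self._stats(x)
        if self.training:
            _, _, h, w = x.shape
            # Sample a random spatial window and compute its local statistics.
            wh = max(1, int(h * torch.empty(1).uniform_(self.min_ratio, 1.0)))
            ww = max(1, int(w * torch.empty(1).uniform_(self.min_ratio, 1.0)))
            top = torch.randint(0, h - wh + 1, (1,)).item()
            left = torch.randint(0, w - ww + 1, (1,)).item()
            win_mean, win_var = self._stats(x[:, :, top:top + wh, left:left + ww])
            mean = self.mix * win_mean + (1 - self.mix) * full_mean
            var = self.mix * win_var + (1 - self.mix) * full_var
        else:
            mean, var = full_mean, full_var
        return (x - mean) / torch.sqrt(var + self.eps)
```

Following the abstract, the WIN-WIN self-distillation would then run two forward passes with different random windows and add a consistency loss between the two predictions.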
Multi-label image recognition is a fundamental yet practical task, because real-world images inherently carry multiple semantic labels. However, it is difficult to collect large-scale multi-label annotations due to the complexity of both the input images and the output label space. To reduce the annotation cost, we propose a Structured Semantic Transfer (SST) framework that enables training multi-label recognition models with partial labels, i.e., only some labels are known while the other labels are missing (also called unknown labels) in each image. The framework consists of two complementary transfer modules that explore intra-image and cross-image semantic correlations to transfer knowledge from known labels and generate pseudo labels for unknown labels. Specifically, an intra-image semantic transfer module learns an image-specific label co-occurrence matrix and maps known labels to complement unknown labels based on this matrix. Meanwhile, a cross-image transfer module learns category-specific feature similarities and helps complement unknown labels that have high similarities. Finally, both known and generated labels are used to train the multi-label recognition model. Extensive experiments on the Microsoft COCO, Visual Genome, and Pascal VOC datasets show that the proposed SST framework obtains superior performance over current state-of-the-art algorithms. Code is available at https://github.com/hcplab-sysu/sst-ml-pl.
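To make the intra-image transfer step concrete, here is a minimal sketch of how known labels could be propagated through an image-specific co-occurrence matrix to fill in unknown labels. The aggregation rule and threshold are illustrative assumptions; in the paper the matrix is learned.

```python
import torch

def intra_image_pseudo_labels(known_labels, cooccurrence, threshold=0.5):
    """Sketch of intra-image semantic transfer as described in the abstract.

    known_labels:  (num_classes,) tensor, 1 for known positive labels, 0 for unknown.
    cooccurrence:  (num_classes, num_classes) image-specific label co-occurrence matrix.
    Returns the label vector with pseudo labels filled in for unknown entries.
    """
    # Propagate evidence from known positives to every class via co-occurrence.
    scores = known_labels @ cooccurrence          # (num_classes,)
    pseudo = (scores > threshold).float()
    # Keep original annotations where they exist; fill in pseudo labels elsewhere.
    return torch.where(known_labels > 0, known_labels, pseudo)
```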
To address the problem of data inconsistency among different facial expression recognition (FER) datasets, many cross-domain FER methods (CD-FERs) have been devised in recent years. Although each claims to achieve superior performance, fair comparisons are lacking due to inconsistent choices of source/target datasets and feature extractors. In this work, we first analyze the performance effects caused by these inconsistent choices, and then re-implement some well-performing CD-FER and recently published domain adaptation algorithms. We ensure that all these algorithms adopt the same source datasets and feature extractors for fair CD-FER evaluation. We find that most of the current leading algorithms use adversarial learning to learn holistic domain-invariant features to mitigate domain shift. However, these algorithms ignore local features, which are more transferable across different datasets and carry more detailed content for fine-grained adaptation. To address these issues, we integrate graph representation propagation with adversarial learning for cross-domain holistic-local feature co-adaptation by developing a novel adversarial graph representation adaptation (AGRA) framework. Specifically, it first builds two graphs to correlate holistic and local regions within each domain and across different domains, respectively. Then, it extracts holistic-local features from the input image and uses learnable per-class statistical distributions to initialize the corresponding graph nodes. Finally, two stacked graph convolutional networks (GCNs) are adopted to propagate holistic-local features within each domain to explore their interactions, and across different domains for holistic-local feature co-adaptation. We conduct extensive and fair evaluations on several popular benchmarks and show that the proposed AGRA framework outperforms previous state-of-the-art methods.
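The propagation step described here can be pictured with a small stacked-GCN sketch: node features (holistic plus local-region features, initialized from learnable per-class statistics) are propagated over an intra-/cross-domain adjacency matrix. The layer sizes and the two-layer depth are our assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class TwoLayerGCN(nn.Module):
    """Minimal sketch of stacked GCN propagation over holistic-local graph nodes."""

    def __init__(self, in_dim=512, hidden_dim=256, out_dim=512):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, out_dim)

    def forward(self, nodes, adj):
        # nodes: (num_nodes, in_dim); adj: (num_nodes, num_nodes), row-normalized.
        h = torch.relu(adj @ self.fc1(nodes))
        return adj @ self.fc2(h)
```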
This paper focuses on designing efficient models with low parameters and FLOPs for dense predictions. Even though CNN-based lightweight methods have achieved stunning results after years of research, the trade-off between model accuracy and constrained resources still needs further improvement. This work rethinks the essential unity of the efficient Inverted Residual Block in MobileNetv2 and the effective Transformer in ViT, inductively abstracting a general concept of the Meta-Mobile Block, and we argue that the specific instantiation is very important to model performance even though the same framework is shared. Motivated by this phenomenon, we deduce a simple yet efficient modern Inverted Residual Mobile Block (iRMB) for mobile applications, which absorbs CNN-like efficiency to model short-distance dependency and Transformer-like dynamic modeling capability to learn long-distance interactions. Furthermore, we design a ResNet-like 4-phase Efficient MOdel (EMO) based only on a series of iRMBs for dense applications. Massive experiments on the ImageNet-1K, COCO2017, and ADE20K benchmarks demonstrate the superiority of our EMO over state-of-the-art methods; e.g., our EMO-1M/2M/5M achieve 71.5, 75.1, and 78.4 Top-1 accuracy, surpassing SoTA CNN-/Transformer-based models while trading off model accuracy and efficiency well.
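As a hedged sketch of a block in this spirit, the code below mixes a depthwise convolution (short-distance modeling) with multi-head self-attention (long-distance modeling) inside an inverted-residual shape. The expansion ratio, head count, and ordering of the two paths are our assumptions, not the paper's exact iRMB design.

```python
import torch
import torch.nn as nn

class iRMBSketch(nn.Module):
    """Illustrative inverted-residual block mixing depthwise conv and self-attention."""

    def __init__(self, dim, expand=4, heads=4):
        super().__init__()
        hidden = dim * expand
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.expand = nn.Conv2d(dim, hidden, kernel_size=1)
        self.dw = nn.Conv2d(hidden, hidden, kernel_size=3, padding=1, groups=hidden)
        self.project = nn.Conv2d(hidden, dim, kernel_size=1)
        self.act = nn.SiLU()

    def forward(self, x):                       # x: (B, C, H, W)
        b, c, h, w = x.shape
        # Long-distance interactions via self-attention over flattened tokens.
        tokens = self.norm(x.flatten(2).transpose(1, 2))   # (B, H*W, C)
        attn_out, _ = self.attn(tokens, tokens, tokens)
        x = x + attn_out.transpose(1, 2).reshape(b, c, h, w)
        # Short-distance dependency via expand -> depthwise conv -> project.
        y = self.project(self.act(self.dw(self.act(self.expand(x)))))
        return x + y
```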
Supervised Question Answering systems (QA systems) rely on domain-specific human-labeled data for training. Unsupervised QA systems generate their own question-answer training pairs, typically using secondary knowledge sources to achieve this outcome. Our approach (called PIE-QG) uses Open Information Extraction (OpenIE) to generate synthetic training questions from paraphrased passages and uses the question-answer pairs as training data for a language model for a state-of-the-art QA system based on BERT. Triples in the form of <subject, predicate, object> are extracted from each passage, and questions are formed with subjects (or objects) and predicates while objects (or subjects) are considered as answers. Experimenting on five extractive QA datasets demonstrates that our technique achieves on-par performance with existing state-of-the-art QA systems with the benefit of being trained on an order of magnitude fewer documents and without any recourse to external reference data sources.
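The triple-to-question step described above can be illustrated with a tiny template function: the subject (or object) and the predicate stay in the question, and the remaining argument becomes the answer. The wh-word choice and templates here are our own illustrative assumptions, not PIE-QG's actual generation rules.

```python
def triples_to_qa_pairs(triples):
    """Form synthetic QA pairs from OpenIE <subject, predicate, object> triples."""
    qa_pairs = []
    for subj, pred, obj in triples:
        # Question about the object: subject and predicate stay in the question.
        qa_pairs.append((f"{subj} {pred} what?", obj))
        # Question about the subject: predicate and object stay in the question.
        qa_pairs.append((f"What {pred} {obj}?", subj))
    return qa_pairs

print(triples_to_qa_pairs([("Marie Curie", "discovered", "radium")]))
# [('Marie Curie discovered what?', 'radium'), ('What discovered radium?', 'Marie Curie')]
```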
Transformer has achieved impressive successes for various computer vision tasks. However, most existing studies require pretraining the Transformer backbone on a large-scale labeled dataset (e.g., ImageNet) to achieve satisfactory performance, which is usually unavailable for medical images. Additionally, due to the gap between medical and natural images, the improvement brought by ImageNet pretrained weights degrades significantly when the weights are transferred to medical image processing tasks. In this paper, we propose Bootstrap Own Latent of Transformer (BOLT), a self-supervised learning approach specifically for medical image classification with the Transformer backbone. Our BOLT consists of two networks, namely online and target branches, for self-supervised representation learning. Concretely, the online network is trained to predict the target network's representation of the same patch embedding tokens under a different perturbation. To maximally excavate the impact of the Transformer from limited medical data, we propose an auxiliary difficulty ranking task: the Transformer is enforced to identify which branch (i.e., online/target) is processing the more difficult perturbed tokens. Overall, the Transformer endeavours to distill transformation-invariant features from the perturbed tokens to simultaneously achieve difficulty measurement and maintain the consistency of self-supervised representations. The proposed BOLT is evaluated on three medical image processing tasks, i.e., skin lesion classification, knee fatigue fracture grading, and diabetic retinopathy grading. The experimental results validate the superiority of our BOLT for medical image classification compared to ImageNet pretrained weights and state-of-the-art self-supervised learning approaches.
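A minimal BYOL-style sketch of the online/target scheme described above is given below: the online branch predicts the target branch's representation of the same tokens under a different perturbation, and the target is an exponential-moving-average copy. The perturbation function, the predictor head, and the EMA rate are assumptions, and the auxiliary difficulty-ranking head is omitted for brevity.

```python
import torch
import torch.nn.functional as F

def bolt_like_step(online, target, predictor, tokens, perturb, ema=0.99):
    """One hedged training step of an online/target consistency objective."""
    view_a, view_b = perturb(tokens), perturb(tokens)
    online_out = predictor(online(view_a))
    with torch.no_grad():
        target_out = target(view_b)
    # Negative cosine similarity between the prediction and the target representation.
    loss = 2 - 2 * F.cosine_similarity(online_out, target_out, dim=-1).mean()

    # EMA update of the target branch from the online branch.
    with torch.no_grad():
        for p_t, p_o in zip(target.parameters(), online.parameters()):
            p_t.mul_(ema).add_(p_o, alpha=1 - ema)
    return loss
```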
Knowledge graph embedding (KGE), which maps entities and relations in a knowledge graph into continuous vector spaces, has achieved great success in predicting missing links in knowledge graphs. However, knowledge graphs often contain incomplete triples that are difficult for KGEs to infer inductively. To address this challenge, we resort to analogical inference and propose a novel and general self-supervised framework, AnKGE, to enhance KGE models with analogical inference capability. We propose an analogical object retriever that retrieves appropriate analogical objects at the entity, relation, and triple levels. In AnKGE, we train an analogy function for each level of analogical inference, which takes the original element embedding from a well-trained KGE model as input and outputs the analogical object embedding. To combine the inductive inference capability of the original KGE model with the analogical inference capability enhanced by AnKGE, we interpolate the analogy score with the base model score and introduce adaptive weights in the score function for prediction. Through extensive experiments on the FB15k-237 and WN18RR datasets, we show that AnKGE achieves competitive results on the link prediction task and performs analogical inference well.
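The interpolation mentioned above can be written down directly; the sketch below blends the base KGE score with level-wise analogy scores. Treating the adaptive weights as given inputs and using a convex combination are our assumptions based on the abstract, not the paper's exact score function.

```python
def ankge_like_score(base_score, level_scores, level_weights, lam=0.5):
    """Blend a base KGE score with entity-/relation-/triple-level analogy scores."""
    analogy_score = sum(w * s for w, s in zip(level_weights, level_scores))
    return (1.0 - lam) * base_score + lam * analogy_score
```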
Digital engineering transformation is a crucial process for the engineering paradigm shifts in the fourth industrial revolution (4IR), and artificial intelligence (AI) is a critical enabling technology in digital engineering transformation. This article discusses the following research questions: What are the fundamental changes in the 4IR? More specifically, what are the fundamental changes in engineering? What is digital engineering? What are the main uncertainties there? What is trustworthy AI? Why is it important today? What are emerging engineering paradigm shifts in the 4IR? What is the relationship between the data-intensive paradigm and digital engineering transformation? What should we do for digitalization? From investigating the pattern of industrial revolutions, this article argues that ubiquitous machine intelligence (uMI) is the defining power brought by the 4IR. Digitalization is a condition to leverage ubiquitous machine intelligence. Digital engineering transformation towards Industry 4.0 has three essential building blocks: digitalization of engineering, leveraging ubiquitous machine intelligence, and building digital trust and security. The engineering design community at large is facing an excellent opportunity to bring the new capabilities of ubiquitous machine intelligence and trustworthy AI principles, as well as digital trust, together in various engineering systems design to ensure the trustworthiness of systems in Industry 4.0.
Surgical robot automation has attracted increasing research interest over the past decade, given its huge potential to benefit surgeons, nurses, and patients. Recently, the learning paradigm of embodied AI has demonstrated promising ability to learn good control policies for various complex tasks, where embodied AI simulators play an essential role in facilitating relevant research. However, existing open-sourced simulators for surgical robots still do not sufficiently support human interaction through physical input devices, which further limits effective investigation of how human demonstrations would affect policy learning. In this paper, we study human-in-the-loop embodied intelligence with a new interactive simulation platform for surgical robot learning. Specifically, we establish our platform based on our previously released SurRoL simulator, with several new features co-developed to allow high-quality human interaction via an input device. With these, we further propose to collect human demonstrations and imitate the action patterns to achieve more effective policy learning. We showcase the improvement of our simulation environment with the designed new features and tasks, and validate state-of-the-art reinforcement learning algorithms using the interactive environment. Promising results are obtained, and we hope they pave the way for future research on surgical embodied intelligence. Our platform is released and will be continuously updated at: https://med-air.github.io/SurRoL/
Learning the underlying distribution of molecular graphs and generating high-fidelity samples is a fundamental research problem in drug discovery and material science. However, accurately modeling distribution and rapidly generating novel molecular graphs remain crucial and challenging goals. To accomplish these goals, we propose a novel Conditional Diffusion model based on discrete Graph Structures (CDGS) for molecular graph generation. Specifically, we construct a forward graph diffusion process on both graph structures and inherent features through stochastic differential equations (SDE) and derive discrete graph structures as the condition for reverse generative processes. We present a specialized hybrid graph noise prediction model that extracts the global context and the local node-edge dependency from intermediate graph states. We further utilize ordinary differential equation (ODE) solvers for efficient graph sampling, based on the semi-linear structure of the probability flow ODE. Experiments on diverse datasets validate the effectiveness of our framework. Particularly, the proposed method still generates high-quality molecular graphs in a limited number of steps.
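To illustrate the flavor of the forward process described above, the sketch below applies a VP-SDE-style noising step jointly to a dequantized adjacency matrix and node features, and quantizes the noisy adjacency to obtain a discrete structure as the reverse-process condition. The noise schedule, the continuous treatment of edges, and the quantization threshold are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def forward_diffuse(adj, feats, t, beta_min=0.1, beta_max=20.0):
    """VP-SDE-style forward noising of graph structure and node features (sketch)."""
    t = torch.as_tensor(t, dtype=adj.dtype)
    # Mean coefficient and standard deviation of the VP-SDE marginal at time t in [0, 1].
    log_mean_coeff = -0.25 * t ** 2 * (beta_max - beta_min) - 0.5 * t * beta_min
    alpha = torch.exp(log_mean_coeff)
    sigma = torch.sqrt(1.0 - alpha ** 2)
    noisy_adj = alpha * adj + sigma * torch.randn_like(adj)
    noisy_feats = alpha * feats + sigma * torch.randn_like(feats)
    # Discrete graph structure used as the condition for the reverse generative process.
    discrete_adj = (noisy_adj > 0.5).float()
    return noisy_adj, noisy_feats, discrete_adj
```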